
    A technology-aided multi-modal training approach to assist abdominal palpation training and its assessment in medical education

    Kinaesthetic Learning Activities (KLA) are techniques for enhancing the motor learning process to provide a deep understanding of fundamental skills in particular disciplines. With KLA, learning takes place by carrying out a physical activity to transform empirical achievements into representative cognitive understanding. In disciplines such as medical education, frequent hands-on practice of certain motor skills plays a key role in the development of medical students' competency. Therefore, it is essential that clinicians master these core skills early in their educational journey and retain them for the entirety of their career. Transferring knowledge of dexterous motor skills, such as clinical examinations, from experts to novices demands a systematic approach to quantifying relevant motor variables, with the help of medical experts, in order to form a reference best-practice model for target skills. Additional information (augmented feedback) on certain aspects of movement can be extracted from this model and visualised via multi-modal sensory channels to enhance motor performance and learning. This thesis proposes a novel KLA methodology to significantly improve the quality of palpation training for medical students. In particular, it investigates whether the existing abdominal palpation skills acquisition process (motor performance and learning) can be enhanced by providing instructional concurrent and terminal augmented feedback on the forces applied by the learner's hand via autonomous multi-modal displays.
This is achieved by: identifying key motor variables with the help of medical experts; forming a gold-standard model for target skills by collecting pre-defined motor variables with an innovative quantification technique; designing assessment criteria by analysing the medical experts' data; and systematically evaluating the impact of instructional augmented feedback on medical students' motor performance with two distinct assessment approaches (machine-based and human-based). In addition, an evaluation of performance on a simpler task is carried out using a game-based training method, to compare feedback visualisation techniques, such as the concurrent visual and auditory feedback used in a serious-games environment, with abstract visualisation of motor variables. A detailed between-participants study is presented to evaluate the effect of concurrent augmented feedback on participants' skills acquisition in the motor learning process. A significant improvement in medical students' motor performance was observed when augmented feedback on applied forces was visually presented (H(2) = 6.033, p < .05). Moreover, a positive correlation was found between computer-generated and human-generated scores, r = .62, p (one-tailed) < .05. This indicates the potential of the computer-based assessment technique to assist the current assessment process in medical education. The same results were also achieved in a blind-folded (no-feedback) transfer test evaluating performance and short-term retention of skills in the game-based training approach. The accuracy of the exerted target force for participants in the game-playing group, who were trained using the game approach (Mdn = 0.86), differed significantly from that of participants in the control group, who trained using abstract visualisation of the exerted force value (Mdn = 1.56), U = 61, z = -2.137, p < .05, r = -0.36.
Finally, the usability of both motor learning approaches was surveyed via feedback questionnaires, and positive responses were received from users. The research presented shows that concurrent augmented feedback significantly improves participants' motor control abilities. Furthermore, advanced visualisation techniques such as multi-modal displays increase participants' motivation to engage in learning and to retain motor skills.
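
The statistical comparisons cited above can be sketched with SciPy on synthetic scores (the study's actual force data are not reproduced here; all values below are illustrative):

```python
from scipy import stats

# Hypothetical machine- vs human-generated assessment scores
machine = [62, 70, 55, 80, 75, 68, 59, 72, 66, 77]
human = [60, 74, 50, 82, 70, 65, 61, 75, 63, 79]

# Pearson correlation between the two assessment approaches (cf. r = .62)
r, p_r = stats.pearsonr(machine, human)

# Kruskal-Wallis H test across three training groups (cf. H(2) = 6.033)
g1, g2, g3 = [1.2, 0.9, 1.1], [0.8, 0.7, 0.9], [1.5, 1.6, 1.4]
h, p_h = stats.kruskal(g1, g2, g3)

# Mann-Whitney U test between two independent groups (cf. U = 61)
u, p_u = stats.mannwhitneyu(g1, g3, alternative="two-sided")
```

These nonparametric tests match the thesis's reporting style (H, U, and Mdn values), which avoids assuming normally distributed force errors.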

    DeepMetricEye: Metric Depth Estimation in Periocular VR Imagery

    Despite the enhanced realism and immersion provided by VR headsets, users frequently encounter adverse effects such as digital eye strain (DES), dry eye, and potential long-term visual impairment due to excessive eye stimulation from VR displays and pressure from the mask. Recent VR headsets are increasingly equipped with eye-oriented monocular cameras to segment ocular feature maps. Yet, to compute the incident light stimulus and observe periocular condition alterations, it is imperative to transform these relative measurements into metric dimensions. To bridge this gap, we propose a lightweight framework, derived from a re-optimised U-Net 3+ deep learning backbone, to estimate measurable periocular depth maps. Compatible with any VR headset equipped with an eye-oriented monocular camera, our method reconstructs three-dimensional periocular regions, providing a metric basis for related light-stimulus calculation protocols and medical guidelines. To navigate the complexities of data collection, we introduce a Dynamic Periocular Data Generation (DPDG) environment based on UE MetaHuman, which synthesises thousands of training images from a small quantity of human facial scan data. Evaluated on a sample of 36 participants, our method exhibited notable efficacy in the periocular global precision evaluation experiment and in pupil diameter measurement.
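
One standard way to perform the relative-to-metric conversion the paper motivates is a least-squares scale-and-shift alignment against sparse known depths; the sketch below is a generic illustration with made-up calibration values, not the paper's re-optimised U-Net 3+ pipeline:

```python
import numpy as np

rel = np.array([0.21, 0.35, 0.50, 0.72])  # relative depth predictions
ref = np.array([18.0, 24.0, 31.0, 41.0])  # known metric depths (mm, assumed)

# Solve ref ≈ s * rel + t for scale s and shift t in the least-squares sense
A = np.stack([rel, np.ones_like(rel)], axis=1)
(s, t), *_ = np.linalg.lstsq(A, ref, rcond=None)

metric = s * rel + t  # metric depth estimates in mm
```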

    Finding building footprints in over-detailed topographic maps

    Building footprints are a key component of many GIS applications, including morphological and street-view-based analysis. Crowdsourced data such as OpenStreetMap (OSM) is widespread but not consistently detailed enough to reliably extract footprints of individual buildings, while topographic building maps such as the Ordnance Survey MasterMap Topography Layer (MTL) may split footprints into multiple polygons or include constructions without an address. We propose a method to determine which topographic building polygons can be unambiguously matched to individual footprints of buildings with an address, to enable integration with address-based data sources such as transactions or energy performance certificates. The results suggest that this method recovers significantly more building footprints than can be obtained from OSM.
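
The "unambiguous match" criterion can be illustrated with a toy intersection-over-union check on axis-aligned rectangles (the actual method operates on arbitrary MTL polygons; the threshold and geometry here are assumptions for illustration):

```python
def iou(a, b):
    """Intersection-over-union of two rectangles given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

footprint = (0, 0, 10, 10)               # candidate building footprint
mtl = [(0, 0, 10, 9), (20, 0, 25, 5)]    # topographic building polygons
matches = [p for p in mtl if iou(footprint, p) > 0.5]
unambiguous = len(matches) == 1          # exactly one confident match
```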

    Commercial and research-based wearable devices in spinal postural analysis: A systematic review

    The widespread use of ubiquitous computing has led to people spending more time in front of screens, causing poor posture. The COVID-19 pandemic and the shift towards remote work have only worsened the situation, as many people now work from home with inadequate ergonomics. Maintaining a healthy posture is crucial for both physical and mental health, and poor posture can result in spinal problems. Wearable systems have been developed to monitor posture and provide instant feedback, with the goal of improving posture over time. This article reviews commercially available and research-based wearable devices used to analyse posture. The potential of these devices in the healthcare industry, particularly in preventing, monitoring, and treating spinal and musculoskeletal conditions, is also discussed. The findings indicate that current devices can accurately assess posture in clinical settings, but further research is needed to validate the long-term effectiveness of these technologies and to improve their practicality for commercial use.
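
The core computation in many such devices is a trunk-tilt estimate from a resting 3-axis accelerometer, where gravity dominates the signal; the reading and alert threshold below are illustrative, not taken from any reviewed device:

```python
import math

def tilt_deg(ax, ay, az):
    """Angle between the sensor's z-axis and gravity, in degrees."""
    return math.degrees(math.acos(az / math.sqrt(ax * ax + ay * ay + az * az)))

reading = (0.17, 0.0, 0.985)  # accelerometer output in g-units (slight lean)
angle = tilt_deg(*reading)
slouching = angle > 20        # hypothetical posture-alert threshold
```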

    RESenv: A Realistic Earthquake Simulation Environment based on Unreal Engine

    Earthquakes have a significant impact on societies and economies, driving the need for effective search and rescue strategies. With the growing role of AI and robotics in these operations, high-quality synthetic visual data becomes crucial. Current simulation methods, which mostly focus on single-building damage, often fail to provide realistic visuals for complex urban settings. To bridge this gap, we introduce an innovative earthquake simulation system using the Chaos Physics System in Unreal Engine. Our approach aims to offer the detailed and realistic visual simulations essential for AI and robotic training in rescue missions. By integrating real seismic waveform data, we enhance the authenticity and relevance of our simulations, ensuring they closely mirror real-world earthquake scenarios. Leveraging the advanced capabilities of Unreal Engine, our system delivers not only high-quality visualisations but also real-time dynamic interactions, making the simulated environments more immersive and responsive. By providing advanced renderings, accurate physical interactions, and comprehensive geological movements, our solution outperforms traditional methods in efficiency and user experience. Our simulation environment stands out in its detail and realism, making it a valuable tool for AI tasks related to earthquake response, such as path planning and image recognition. We validate our approach through three AI-based tasks: similarity detection, path planning, and image segmentation.
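
One integration step such a system needs is resampling a recorded seismic displacement waveform onto the engine's frame times, so it can drive per-tick ground motion; the waveform below is synthetic, and the sample rates are assumptions:

```python
import numpy as np

wave = np.sin(np.linspace(0.0, 4.0 * np.pi, 400))  # 4 s of motion at 100 Hz
t_wave = np.arange(400) / 100.0                    # seismograph sample times
t_frames = np.arange(240) / 60.0                   # 4 s of 60 fps frame times

# Linear interpolation gives a per-frame ground displacement to apply
ground_offset = np.interp(t_frames, t_wave, wave)
```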

    Language as Reality: A Co-Creative Storytelling Game Experience in 1001 Nights using Generative AI

    In this paper, we present "1001 Nights", an AI-native game that allows players to lead the in-game reality through co-created storytelling with a character driven by a large language model. The concept is inspired by Wittgenstein's idea that the limits of one's world are determined by the bounds of one's language. Using advanced AI tools such as GPT-4 and Stable Diffusion, the second iteration of the game enables the protagonist, Shahrzad, to realise words and stories in her world. The player can steer the conversation with the AI King towards specific keywords, which then become battle equipment in the game. This blend of interactive narrative and text-to-image transformation challenges the conventional border between the game world and reality through a dual perspective. We focus on Shahrzad, who seeks to alter her fate compared to the original folklore, and on the player, who collaborates with the AI to craft narratives and shape the game world. We explore the technical and design elements of implementing such a game, with the objective of enhancing the narrative game genre with AI-generated content and delving into AI-native gameplay possibilities.
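
The keyword-to-equipment mechanic can be sketched as a simple scan of generated text; the equipment list and matching rule here are illustrative, not the game's actual implementation:

```python
EQUIPMENT = {"sword", "shield", "spear", "bow"}  # illustrative item list

def extract_items(story):
    """Return equipment words found in a generated story line."""
    words = {w.strip(".,!?;:").lower() for w in story.split()}
    return sorted(words & EQUIPMENT)

line = "The king raised his silver sword and a gleaming shield."
items = extract_items(line)
```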

    A game-based training approach to enhance human hand motor learning and control abilities

    This work presents a serious game designed to improve users’ control of applied pressure. In particular, this work is part of a larger goal of providing medical students with further opportunities for palpation practice and assistance as part of their education. Typically, medical students are limited by the number of volunteers they can practise on and the amount of time they can spend with more experienced practitioners to develop fundamental palpation skills. Correct palpation skills are crucial, as they inform diagnosis in a large number of healthcare fields and are a skill required by most healthcare professionals. The ability to enhance the educational process behind healthcare professionals’ palpation skills could lead to a more holistic student experience. This work presents a serious game in which one aspect of palpation, hand control through the correct application of pressure to a patient, is the target for user improvement. A serious game modelled on the infinite-runner genre was designed to be controlled via an input device, developed in-house with off-the-shelf components, that translates real-world pressure into in-game movement. The game was tested in a participant trial involving a game-playing group (n = 15) and a control group (n = 15), and a significant improvement in a blind-folded pressure test was observed for the game-playing group. User feedback via a questionnaire also showed a positive response to the game.

    Hyborg Agency: Cultivating conversational AI creatures through community connections

    Hyborg Agency, an interactive art installation, integrates a digital forest with a human Discord chat channel, enabling communication with AI deer, or 'hyborgs'. The forest metaphorically represents human society, where AI agents use the large language model (LLM) ChatGPT to learn and grow by summarising conversations and updating their knowledge base, assimilating community thoughts as enriching nutrients.
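
The summarise-and-update loop described above can be sketched schematically; the function names are placeholders, and the summariser is a stand-in for the installation's ChatGPT calls:

```python
knowledge_base = []  # a hyborg's accumulated memory

def summarise(messages):
    # Placeholder for an LLM summarisation call (ChatGPT in the installation)
    return "summary: " + " | ".join(m[:20] for m in messages)

def absorb(messages):
    """Digest a batch of community chat into the hyborg's knowledge base."""
    knowledge_base.append(summarise(messages))

absorb(["the forest feels alive today", "do the deer dream?"])
```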

    A technology-aided multi-modal training approach to assist abdominal palpation training and its assessment in medical education

    Computer-assisted multi-modal training is an effective way of learning complex motor skills in various applications. In particular disciplines (e.g. healthcare), incompetency in performing dexterous hands-on examinations (clinical palpation) may result in misdiagnosis of symptoms, serious injuries or even death. Furthermore, a high-quality clinical examination can help to exclude significant pathology, and reduce the time and cost of diagnosis by eliminating the need for unnecessary medical imaging. Medical palpation is used regularly as an effective preliminary diagnosis method all around the world, but years of training are currently required to achieve competency. This paper focuses on a multi-modal palpation training system to teach and improve clinical examination skills in relation to the abdomen. Our aim is to shorten the palpation training duration significantly by increasing the frequency of rehearsals as well as providing essential augmented feedback on how to perform various abdominal palpation techniques, captured and modelled from medical experts. A comparative evaluation of the usability and effectiveness of the method is presented in this study. Four professional tutors were invited to take part in the design, development and assessment stages of this study. Widely used user-centred design methods were employed to form a know-how document and assessment criteria with the help of medical professionals. Our interface was used to capture and develop a best-practice model for each palpation task for further assessment. Twenty-three first-year medical students, divided into a control group (n = 8), a semi-visually trained group (n = 8), and a fully visually trained group (n = 7), were invited to perform three palpation tasks (superficial, deep and liver).
The medical students’ performances were assessed using both computer-based and human-based methods, and a positive correlation was shown between the generated scores, r = .62, p (one-tailed) < .05. The visually trained groups, in which abstract visualisation of applied forces and their palmar locations was provided to the students during each palpation examination, significantly outperformed the control group (p < .05). Moreover, a positive trend was observed between groups when visual feedback was presented, J = 132, z = 2.62, r = 0.55.
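
The trend statistic reported above (J) is the Jonckheere-Terpstra test for an ordered alternative across the three training groups; below is a pure-Python sketch on synthetic data, not the study's measurements:

```python
def jonckheere_J(groups):
    """J = count of ascending pairs across ordered groups (ties count 0.5)."""
    J = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            for x in groups[i]:
                for y in groups[j]:
                    J += 1.0 if x < y else (0.5 if x == y else 0.0)
    return J

# Hypothetical scores ordered control < semi-visual < fully visual
control, semi, full = [3, 4, 5], [5, 6, 7], [7, 8, 9]
J = jonckheere_J([control, semi, full])
```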